
    Extracting Formal Models from Normative Texts

    We are concerned with the analysis of normative texts: documents based on the deontic notions of obligation, permission, and prohibition. Our goal is to make queries about these notions and to verify that a text satisfies certain properties concerning causality of actions and timing constraints. This requires taking the original text and building a representation (model) of it in a formal language, in our case the C-O Diagram formalism. We present an experimental, semi-automatic aid that helps to bridge the gap between a normative text in natural language and its C-O Diagram representation. Our approach uses dependency structures obtained from the state-of-the-art Stanford Parser, applying our own rules and heuristics to extract the relevant components. The result is a tabular data structure in which each sentence is split into suitable fields, which can then be converted into a C-O Diagram. The process is not fully automatic, however, and some post-editing by the user is generally required. We apply our tool and perform experiments on documents from different domains, and report an initial evaluation of the accuracy and feasibility of our approach.
    Comment: Extended version of a conference paper at the 21st International Conference on Applications of Natural Language to Information Systems (NLDB 2016). arXiv admin note: substantial text overlap with arXiv:1607.0148
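    As a rough illustration of this kind of extraction step, the sketch below uses the Python stanza library as a stand-in for the Stanford Parser used in the paper: it maps modal verbs to deontic modalities and pulls the agent and action of each sentence into tabular rows. The modal-to-modality mapping and the field names are illustrative assumptions, not the paper's actual rules, and negation handling is omitted.

```python
# A rough sketch of deontic-component extraction, assuming the `stanza`
# dependency parser as a stand-in for the Stanford Parser used in the paper.
# The modal-to-modality mapping and field names are illustrative only.
import stanza

MODALITY = {"shall": "obligation", "must": "obligation", "may": "permission"}

nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

def extract_rows(text):
    rows = []
    for sent in nlp(text).sentences:
        for w in sent.words:
            if w.lemma in MODALITY and w.deprel == "aux":
                head = sent.words[w.head - 1]  # the main verb the modal attaches to
                subj = next((x.text for x in sent.words
                             if x.head == head.id and "subj" in x.deprel), None)
                rows.append({"agent": subj,
                             "modality": MODALITY[w.lemma],
                             "action": head.lemma,
                             "sentence": sent.text})
    return rows

rows = extract_rows("The tenant shall pay the rent. The landlord may inspect the property.")
for row in rows:
    print(row)  # e.g. {'agent': 'tenant', 'modality': 'obligation', 'action': 'pay', ...}
```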

    The Impact of the Suspension of Opening and Closing Call Auctions: Evidence from the National Stock Exchange of India

    A hotly debated issue in the market microstructure literature is the effectiveness of call auctions as against continuous trading systems. In this paper we investigate this issue by studying the impact of the suspension of opening and closing call auctions by the National Stock Exchange of India in 1999. We compare the volatility, efficiency and liquidity (VEL) of securities in the market before and after the suspension, and estimate the value of the auctions to traders by carrying out an event study. Contrary to expectation, we find that VEL factors improved following the suspension, and that the cumulative abnormal returns (CARs) were significant but not uniformly positive or negative. As a partial explanation for these results, we find that less liquid stocks traded less in the auctions than did other securities, especially at the opening, and that they experienced the most gains following the suspension. This suggests that less liquid stocks did not gain the expected benefits from the auctions, and therefore that it cannot be assumed that a call auction system will improve share trading in a less liquid emerging market. Future research in this area will need to pay attention to the composition of the shares being traded and to the nature of the trading process in different shares in the market.
    Keywords: Call Auctions, stock markets, National Stock Exchange of India
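    For readers unfamiliar with the event-study mechanics behind such CAR figures, a compact market-model version is sketched below. The window lengths, the i.i.d. abnormal-return assumption behind the t-statistic, and the single event date are illustrative choices, not details taken from the paper.

```python
# A minimal market-model event study: estimate alpha/beta on a pre-event
# window, cumulate abnormal returns around the event, form a naive t-stat.
# Window lengths and the i.i.d. assumption are illustrative simplifications.
import numpy as np
import pandas as pd

def car_and_tstat(stock: pd.Series, market: pd.Series, event_idx: int,
                  est_len: int = 200, pre: int = 10, post: int = 10):
    # event_idx must leave room for the estimation window (the first return is NaN)
    r_s = np.log(stock).diff()
    r_m = np.log(market).diff()
    est = slice(event_idx - pre - est_len, event_idx - pre)
    beta, alpha = np.polyfit(r_m.iloc[est], r_s.iloc[est], 1)
    resid = r_s.iloc[est] - (alpha + beta * r_m.iloc[est])
    evt = slice(event_idx - pre, event_idx + post + 1)
    ar = r_s.iloc[evt] - (alpha + beta * r_m.iloc[evt])   # abnormal returns
    car = ar.sum()                                        # cumulative abnormal return
    t = car / np.sqrt(len(ar) * resid.var(ddof=2))        # naive i.i.d. t-statistic
    return car, t
```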

    An Analysis of the Impacts of Non-Synchronous Trading on Predictability

    The serial correlation effects which non-synchronous trading can induce in financial data have been documented by various researchers. In this paper we investigate non-synchronous trading effects in terms of the predictability that may be induced in the values of stock indices. This analysis is applied to emerging-market data, on the grounds that such markets may be less liquid and thus prone to a higher degree of non-synchronous trading. We use both a daily data set and a higher-frequency one, since the latter is a prerequisite for capturing intra-day variations in trading activity. When considering one-minute interval data, we obtain clear evidence of predictability between indices with different degrees of non-synchronous trading. We then propose a simple test to infer whether such predictability is mainly attributable to non-synchronous trading or to an actual delayed adjustment on the part of traders. The results obtained from an intra-day analysis suggest that the former is the better explanation for the observed predictability. Future research in this area is needed to shed light on the degree of data predictability which may be exclusively attributed to non-synchronous trading, and on how empirical results may be influenced by the chosen data frequency.
    Keywords: Non-Synchronous Trading, Stock Markets, National Stock Exchange of India, High-Frequency Data
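    A common first screen for this kind of lead-lag predictability is the cross-correlation of returns at different displacements; the sketch below shows the idea for one-minute data. The series names are hypothetical, and a full study would add the microstructure corrections this ignores.

```python
# Cross-correlation screen for lead-lag predictability between two index
# return series sampled at one-minute intervals. Series names hypothetical.
import pandas as pd

def cross_corr(leader: pd.Series, follower: pd.Series, max_lag: int = 10) -> pd.Series:
    """corr(r_leader[t-k], r_follower[t]) for k = 0..max_lag; sizeable values
    at k > 0 suggest the leader's returns predict the follower's k minutes on."""
    r1 = leader.pct_change()
    r2 = follower.pct_change()
    r1, r2 = r1.align(r2, join="inner")   # match timestamps before correlating
    return pd.Series({k: r1.shift(k).corr(r2) for k in range(max_lag + 1)})

# Usage (hypothetical minute-level price series):
# print(cross_corr(nifty_prices, junior_prices))
```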

    Stock market predictability: non-synchronous trading or inefficient markets? Evidence from the National Stock Exchange of India

    Purpose: The main objective of this study is to obtain new empirical evidence on non-synchronous trading effects through modelling the predictability of market indices. Design/methodology/approach: We test for lead-lag effects between the Indian Nifty and Nifty Junior indices using Pesaran-Timmermann tests and Granger causality. We then propose a simple test on overnight returns, in order to infer whether the observed predictability is mainly attributable to non-synchronous trading or to some form of inefficiency. Findings: The evidence suggests that non-synchronous trading is a better explanation for the observed predictability in the Indian stock market. Research limitations/implications: The indication that non-synchronous trading effects become more pronounced in high-frequency data suggests that prior studies using daily data may underestimate the impacts of non-synchronicity. Originality/value: The paper makes several contributions: (a) we look at overnight returns to infer whether predictability is more attributable to non-synchronous trading or to some form of inefficiency, (b) we investigate the impacts of non-synchronicity in terms of lead-lag effects rather than serial correlation, and (c) we use high-frequency data, which gauges the impacts of non-synchronicity during less active parts of the trading day.
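    The Pesaran-Timmermann test referred to here checks whether one series gets the direction of another right more often than chance would allow. A compact implementation of the standard 1992 statistic is sketched below, from the textbook formula and with variable names of my own choosing.

```python
# Pesaran-Timmermann (1992) directional-accuracy test: under the null of no
# sign predictability the statistic is asymptotically standard normal.
import numpy as np
from scipy.stats import norm

def pesaran_timmermann(pred: np.ndarray, actual: np.ndarray):
    up_pred, up_act = pred > 0, actual > 0
    n = len(actual)
    p_hit = np.mean(up_pred == up_act)                 # observed hit rate
    px, py = up_act.mean(), up_pred.mean()             # marginal "up" frequencies
    p_star = py * px + (1 - py) * (1 - px)             # hit rate under independence
    v_hit = p_star * (1 - p_star) / n
    v_star = ((2 * py - 1) ** 2 * px * (1 - px)
              + (2 * px - 1) ** 2 * py * (1 - py)
              + 4 * px * py * (1 - px) * (1 - py) / n) / n
    stat = (p_hit - p_star) / np.sqrt(v_hit - v_star)
    return stat, 1 - norm.cdf(stat)                    # one-sided p-value
```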

    The impact of the suspension of opening and closing call auctions: evidence from the National Stock Exchange of India

    We study the impact of the suspension of opening and closing call auctions by the National Stock Exchange of India in 1999. We compare volatility, efficiency and liquidity (VEL) of securities before and after suspension, and estimate the value of the auctions using an event study. Following suspension, VEL improved and the CARs were significant but not uniformly positive or negative. Also, less liquid stocks traded less in the auctions than other securities, especially at the opening, and they experienced gains following suspension. This is consistent with there being liquidity externalities associated with auctions, as appears to be the case in some industrial countries. We conclude that opening and closing call auctions may not necessarily improve share trading in a less liquid emerging market.

    An IDE for the Grammatical Framework

    Abstract
    The GF Eclipse Plugin provides an integrated development environment (IDE) for developing grammars in the Grammatical Framework (GF). Built on top of the Eclipse Platform, it aids grammar writing by providing instant syntax checking, semantic warnings and cross-reference resolution. Inline documentation and a library browser facilitate the use of existing resource libraries, and compilation and testing of grammars is greatly improved through single-click launch configurations and an in-built test case manager for running treebank regression tests. This IDE promotes grammar-based systems by making the tasks of writing grammars and using resource libraries more efficient, and provides powerful tools to reduce the barrier to entry to GF and encourage new users of the framework. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. FP7-ICT-247914.

    Introduction

    Grammatical Framework (GF)
    GF is a special-purpose framework for writing multilingual grammars targeting multiple parallel languages simultaneously. It provides a functional programming language for declarative grammar writing, where each grammar is split between an abstract syntax common to all languages and multiple language-dependent concrete syntaxes, which define how abstract syntax trees should be linearised into the target languages. From these grammar components, the GF compiler derives both a parser and a lineariser for each concrete language, enabling bi-directional translation between all language pairs (a programmatic sketch of this round trip is given at the end of this section).

    GF grammar development
    As a grammar formalism, GF facilitates the writing of grammars which can form the basis of various kinds of rule-based machine translation applications. While it is common to focus on the theoretical capabilities and characteristics of such formalisms, it is also relevant to assess what software engineering tools exist to aid the grammar writers themselves. The process of writing a GF grammar may be constrained by the framework's formal limits, but its effectiveness and endurance as a language for grammar development is equally determined by the real-world tools which exist to support it. Whether out of developer choice or merely for lack of anything better, GF grammar development typically takes place in traditional text editors, which have no special support for GF apart from a few syntax highlighting schemes made available for certain popular editors. Looking up library functions, grammar compilation and running of regression tests must all take place in separate windows, where the developer frequently enters console commands for searching within source files, loading the GF interpreter, and running some test set against a compiled grammar. GF developers in fact often end up writing their own script files for performing such tasks as a batch. Any syntax errors or compiler warnings generated in the process must be manually interpreted. While some developers may actively choose this low-level approach, the number of integrated development environments (IDEs) available today indicates that there is also a big demand for advanced development setups which provide combined tools for code validation, navigation, refactoring, test suite management and more. Major IDEs such as Eclipse, Microsoft Visual Studio and Xcode have become staples for many developers who want a more integrated experience than the traditional text editor and console combination.
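    The parse-and-linearise round trip mentioned above can be driven programmatically through GF's Python runtime bindings. The sketch below assumes the pgf module and the compiled Foods.pgf example grammar distributed with GF; the concrete language names follow that grammar, and the exact parse output may differ.

```python
# A minimal sketch of bi-directional translation with a compiled GF grammar,
# assuming the `pgf` runtime bindings and the stock Foods.pgf example grammar.
import pgf

grammar = pgf.readPGF("Foods.pgf")           # abstract syntax + concrete languages
eng = grammar.languages["FoodsEng"]
ita = grammar.languages["FoodsIta"]

# Parse English into an abstract syntax tree (an iterator of ranked analyses).
prob, tree = next(eng.parse("this pizza is delicious"))
print(tree)                                  # abstract tree shared by all languages

# Linearise the same tree into Italian: parser plus lineariser give translation.
print(ita.linearize(tree))
```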
    Motivation
    The goal of this work is to provide powerful development tools to the GF developer community, making the work of current grammar writers more efficient as well as promoting the Grammatical Framework itself and encouraging new developers to use the framework. By building a GF development environment as a plugin to an existing IDE platform, we are able to obtain many useful code-editing features "for free". Thus, rather than building generic development tools, we only need to focus on writing IDE customisations which are specific to GF, reducing the total effort required. The rest of this paper is laid out as follows: section 1.2 describes the design choices which guided the plugin's development, section 1.3.1 then covers each of the major features provided by the plugin, and in section 1.4 we discuss our plans for evaluation along with some future directions for the work.

    Design choices

    1.2.1 Eclipse
    Eclipse is a multi-language software development environment which consists of both a standalone IDE and an underlying platform with an extensible plugin system. Eclipse can also be used for the development of self-contained general-purpose applications via its Rich Client Platform (RCP). The Eclipse Platform was chosen as the basis for a GF IDE for various reasons:
    1. It is written in Java, meaning that the same compiled byte code can run on any platform for which there is a compatible virtual machine. This allows for maximum platform support while avoiding the effort required to maintain multiple versions of the product.
    2. The platform is fully open-source under the Eclipse Public License (EPL), is designed to be extensible, and is very well documented.
    3. Eclipse is a widely popular IDE and is already well-known to a number of developers within the GF community.
    4. It has excellent facilities for building language development tools via the Xtext Framework (see below).

    Xtext
    Xtext is an Eclipse-based framework for the development of programming languages and domain-specific languages (DSLs). Given a language description in the form of an EBNF grammar, it can provide all aspects of a complete language infrastructure, including a parser, linker and compiler or interpreter. These tools are completely integrated within the Eclipse IDE yet allow full customisation according to the developer's needs. By taking the grammar for the GF syntax as specified in Ranta (2011, appendix C.6.2) and converting it into a non-left-recursive (LL(*)) equivalent (the transformation is illustrated in the sketch below), we used Xtext's ANTLR-based code generator to obtain a basic infrastructure for the GF programming language, including a parser and serialiser. With this infrastructure as a starting point, a number of GF-specific customisations were written in order to provide support for linking across GF's module hierarchy system. Details of this implementation, as well as other custom-built IDE features, are described in section 1.3.1.
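    The left-recursion removal mentioned above is the textbook transformation: a rule such as E -> E "+" T | T cannot be handled by a recursive-descent (LL) parser, but its equivalent E -> T ("+" T)* can. The sketch below shows the corresponding parsing fragment for a generic arithmetic grammar; it is not GF's actual syntax.

```python
# Textbook left-recursion elimination, shown on a generic arithmetic rule
# (not GF's actual grammar):
#     left-recursive:  E -> E "+" T | T      (recursive descent loops forever)
#     LL equivalent:   E -> T ("+" T)*       (iterate instead of left-recursing)
def parse_expr(tokens: list):
    node = parse_term(tokens)
    while tokens and tokens[0] == "+":
        tokens.pop(0)
        node = ("+", node, parse_term(tokens))  # left-associative, as before
    return node

def parse_term(tokens: list):
    return tokens.pop(0)  # a bare number token, kept trivial for brevity

print(parse_expr(["1", "+", "2", "+", "3"]))    # ('+', ('+', '1', '2'), '3')
```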
    Design principles

    Preserving existing projects
    As users may wish to switch back and forth between a new IDE and their own traditional development setups, it was considered an important design principle that the GF IDE should not alter the developer's existing project structure. To this end, the GF Eclipse Plugin does not have any folder layout requirements, and never moves or alters a developer's files for its own purposes. For storing IDE-specific preferences and intermediary files, meta-data directories are used which do not interfere with the original source files. Preventing application tie-in in this way reduces the investment required from users who want to switch to the new IDE, and ensures that developers retain full control over their GF projects. This is especially important for developers using version control systems, who would want to use the plugin without risking any changes to their repository's directory tree.

    Interaction with the GF compiler
    It is clear that an IDE which provides syntax checking and cross-reference resolution is in some sense replicating the parsing and linking features of that language's compiler. With this comes the decision of what should be re-implemented within the GF IDE itself, and what should be delegated to the existing GF compiler. In terms of minimising the effort required, the obvious option would be to rely on the compiler as much as possible. This would conveniently mean that any future changes to the language, as implemented in updates to the compiler, would require no change to the IDE itself. However, building an IDE which depends entirely on an external program to handle all parsing and linking jobs on-the-fly is not a practical solution. Thanks to the Xtext Framework's parser generator described above, keeping all syntax checking within the IDE platform becomes a feasible option in terms of effort required versus performance benefit. When it comes to reference resolution and linking, however, it was decided that the IDE should delegate these tasks to the GF compiler in a background process (see section 1.3.4). This avoids the work of having to re-implement GF's module hierarchy system within the IDE implementation. Communication of scope information from GF back to the IDE is facilitated through a new "tags" feature in the GF compiler, as described in section 1.3.3. This delegation occurs in an on-demand fashion, where the GF compiler is called asynchronously and as needed, when changes are made to a module's header (a sketch of this pattern is given below).
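    The delegation pattern described, calling the compiler asynchronously and reading back scope information from generated tags files, might look roughly like the sketch below. The --tags flag and the tab-separated tags format are assumptions inferred from the paper's description, not a documented GF command-line interface.

```python
# Sketch of on-demand delegation to an external compiler: run it on a
# background thread and parse the scope information it writes out.
# The `--tags` flag and the tags-file format are ASSUMPTIONS inferred from
# the paper's description, not a documented GF interface.
import subprocess
import threading

def refresh_tags(module_path: str, on_ready) -> None:
    def work():
        subprocess.run(["gf", "--tags", module_path], check=True)
        tags = {}
        with open(module_path.replace(".gf", ".gf-tags")) as fh:
            for line in fh:
                ident, kind, source = line.rstrip("\n").split("\t")[:3]
                tags[ident] = (kind, source)  # identifier -> (kind, defining module)
        on_ready(tags)
    threading.Thread(target=work, daemon=True).start()  # keep the editor responsive
```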

    Playing Nomic using a controlled natural language

    Controlled natural languages have been used in a variety of domains to enable information extraction and formal reasoning. One major challenge is that although the syntax is restricted to enable processing, without a similarly restricted domain of application it is typically difficult to extract useful results. In this paper we look at the development of a controlled natural language to reason about contractual clauses. The language is used to enable human players to play a variant of Nomic, a game of changing contracts, whose very nature makes it extremely challenging to mechanise. We present the controlled natural language with its implementation in the Grammatical Framework, together with an underlying deontic logic used to reason about the contracts proposed by the players.
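    As a generic illustration of what a machine-checkable contract layer can look like (not the paper's actual GF grammar or deontic logic), the sketch below encodes clauses as obligations, permissions and prohibitions over named actions, and checks an action trace against them.

```python
# A toy deontic-clause checker: obligations must appear in the trace,
# prohibited actions must not, and any other action needs a permission.
# Illustrative only; not the deontic logic defined in the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Clause:
    modality: str   # "obligation" | "permission" | "prohibition"
    action: str

def violations(contract: list[Clause], trace: list[str]) -> list[str]:
    found = []
    for c in contract:
        if c.modality == "obligation" and c.action not in trace:
            found.append(f"unfulfilled obligation: {c.action}")
        if c.modality == "prohibition" and c.action in trace:
            found.append(f"violated prohibition: {c.action}")
    permitted = {c.action for c in contract if c.modality != "prohibition"}
    found += [f"unpermitted action: {a}" for a in trace if a not in permitted]
    return found

contract = [Clause("obligation", "pay_rent"), Clause("prohibition", "sublet")]
print(violations(contract, ["sublet"]))
# ['unfulfilled obligation: pay_rent', 'violated prohibition: sublet',
#  'unpermitted action: sublet']
```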

    Mode of Action Studies on Nitrodiphenyl Ether Herbicides


    The clinically led workforcE and activity redesign (CLEAR) programme: a novel data-driven healthcare improvement methodology

    Background: The NHS is facing substantial pressures to recover from the COVID-19 pandemic, and optimising workforce modelling is a fundamental component of the recovery plan. The Clinically Led workforcE and Activity Redesign (CLEAR) programme is a unique methodology that trains clinicians to redesign services, building intrinsic capacity and capability, optimising patient care and minimising the need for costly external consultancy. This paper describes the CLEAR methodology and the evaluation of previous CLEAR projects, including the return on investment. Methods: CLEAR is a work-based learning programme that combines qualitative techniques with data analytics to build innovations and new models of care. It has four stages: (1) clinical engagement, used to gather rich insights from stakeholders and clinicians; (2) data interrogation, utilising clinical and workforce data for cohort analysis; (3) innovation, using structured innovation methods to develop new models of care; and (4) recommendations, comprising report writing, impact assessment and presentation of key findings to executive boards. A mixed-methods formative evaluation was carried out on completed projects, which included semi-structured interviews and surveys with CLEAR associates and stakeholders, together with a health economic logic model developed to link the inputs, processes, outputs and outcomes of CLEAR, as well as the potential impacts of the changes identified by the projects. Results: CLEAR provides more cost-effective delivery of complex change programmes than the alternatives, resulting in a saving of £1.90 for every £1 spent, independent of implementation success. The results suggest that CLEAR recommendations are more likely to be implemented than other complex healthcare interventions because of the level of clinical engagement, and that they have a potential return on investment of up to £14 over 5 years for every £1 invested. CLEAR appears to have a positive impact on staff retention and wellbeing; the cost of a CLEAR project is covered if one medical consultant remains in post for a year. Conclusions: The unique CLEAR methodology is a clinically effective and cost-effective complex healthcare innovation that optimises workforce and activity design and improves staff retention. Embedding the CLEAR methodology in the NHS could have a substantial impact on patient care, staff wellbeing and service provision.